The spleen is one of the solid organs most commonly injured in blunt abdominal trauma. Developing automatic segmentation systems for splenic vascular injury from multi-phase CT could augment severity assessment, improving clinical decision support and outcome prediction. However, accurate segmentation of splenic vascular injury is challenging for two reasons: 1) splenic vascular injuries are highly variable in shape, texture, size, and overall appearance; and 2) data acquisition is a complex and expensive procedure requiring intensive effort from both data scientists and radiologists, which makes large-scale annotated datasets difficult to obtain. In light of these challenges, we design a novel framework for multi-phase splenic vascular injury segmentation, especially when data are limited. On the one hand, we propose to leverage external data to mine pseudo spleen masks that serve as spatial attention, dubbed external attention, to guide the segmentation of splenic vascular injury. On the other hand, we develop a synthetic phase augmentation module, built upon generative adversarial networks, that enriches the internal data by fully exploiting the relation between different phases. By jointly enforcing external attention and enriching the internal data representation, our proposed method outperforms other competing methods and substantially improves on the popular DeepLab-v3+ baseline by more than 7% in average DSC, confirming its effectiveness.
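To make the "external attention" idea concrete, the snippet below is a minimal PyTorch sketch of one plausible reading: a pseudo spleen mask mined by an external model is resampled to the feature resolution and applied as residual spatial gating on encoder features. The module and argument names are hypothetical, and the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttentionGate(nn.Module):
    """Hypothetical sketch: gate encoder features with a pseudo spleen mask
    mined from external data, so the injury decoder focuses on the spleen."""
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, features, pseudo_mask):
        # features: (B, C, H, W); pseudo_mask: (B, 1, h, w), probabilities in [0, 1]
        mask = F.interpolate(pseudo_mask, size=features.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Residual gating: keep the original features, emphasize the spleen region.
        return features + self.refine(features * mask)
```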
We propose space-aware memory queues for in-painting and detecting anomalies in radiography images (abbreviated as SQUID). Radiography imaging protocols focus on particular body regions, therefore producing images of great similarity with recurrent anatomical structures across patients. To exploit this structured information, SQUID consists of a new memory queue and a novel in-painting block operating in the feature space. We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns; at inference, SQUID can identify anomalies (unseen patterns) in the image. SQUID surpasses the state of the art in unsupervised anomaly detection by over 5 points on two chest X-ray benchmark datasets. Additionally, we have created a new dataset (DigitAnatomy) that synthesizes the spatial correlation and consistent shapes of chest anatomy. We hope DigitAnatomy can advance the development, evaluation, and interpretability of anomaly detection methods, particularly for radiography imaging.
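As an illustration of the memory-queue idea (a toy sketch, not SQUID's actual architecture), the code below keeps a queue of feature vectors extracted from normal images and re-expresses each query feature from its nearest memory entries; features of unseen (anomalous) patterns reconstruct poorly, so the residual can serve as an anomaly score. The class name and the top-k reconstruction rule are assumptions.

```python
import torch
import torch.nn.functional as F

class MemoryQueue:
    """Toy feature-space memory queue: stores recent 'normal' patch features so
    that queries are re-expressed from memory; anomalies reconstruct poorly."""
    def __init__(self, dim, size=2048):
        self.bank = torch.zeros(size, dim)
        self.ptr, self.size = 0, size

    @torch.no_grad()
    def enqueue(self, feats):                      # feats: (N, dim), from normal images
        n = feats.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.size
        self.bank[idx] = feats
        self.ptr = (self.ptr + n) % self.size

    def reconstruct(self, feats, k=5):             # feats: (N, dim)
        sim = F.normalize(feats, dim=1) @ F.normalize(self.bank, dim=1).T
        w, idx = sim.topk(k, dim=1)                # nearest memory entries
        w = F.softmax(w, dim=1)
        recon = (w.unsqueeze(-1) * self.bank[idx]).sum(dim=1)
        return recon, (feats - recon).norm(dim=1)  # residual as anomaly score
```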
The success of deep learning relies heavily on large, diverse datasets with extensive labels, yet we often only have access to several small datasets associated with partial labels. In this paper, we launch a new initiative, "Data Assemble", which aims to unleash the full potential of partially labeled data assembled from public datasets. Specifically, we introduce a new dynamic adapter to encode different visual tasks, which addresses the challenges of incomparable, heterogeneous, and even conflicting labeling protocols. We also employ pseudo-labeling and consistency constraints to harness data with missing labels and to mitigate the domain gap across datasets. From rigorous evaluations on three natural-imaging and six medical-imaging tasks, we find that learning from "negative examples" facilitates both classification and segmentation of the classes of interest. This sheds new light on computer-aided diagnosis of rare diseases and emerging pandemics, where "positive examples" are hard to collect yet "negative examples" are relatively easy to assemble. Besides exceeding prior art on the ChestX-ray benchmark, our model is particularly strong at identifying diseases of minority classes, yielding an average improvement of 3 points. Remarkably, when using the existing partial labels, our model matches the performance obtained with full labels, eliminating the need for an extra 40% annotation cost. Code will be available at https://github.com/mrgiovanni/dataAsseMble.
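Assembled datasets carry partial labels, so the loss must ignore unknown entries while still learning from known negatives. The sketch below shows one simple way to express that for multi-label classification; it is illustrative only and is not the paper's dynamic adapter or consistency machinery (the -1 "unknown" convention is an assumption).

```python
import torch
import torch.nn.functional as F

def partial_label_bce(logits, labels):
    """Toy multi-label loss for assembled datasets with partial labels:
    labels are 1 (positive), 0 (negative), -1 (unknown). Unknown entries
    contribute no gradient; known negatives contributed by other datasets
    still act as useful 'negative examples'."""
    known = labels >= 0
    return F.binary_cross_entropy_with_logits(
        logits[known], labels[known].float())
```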
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
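A compact PyTorch sketch of the ASPP idea: parallel atrous (dilated) 3x3 convolutions with different rates probe the same feature map at several effective fields of view, and their outputs are fused. This simplified version omits batch normalization and the image-level pooling of the full DeepLab head, and fusing by concatenation plus a 1x1 convolution is one common variant rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Simplified Atrous Spatial Pyramid Pooling: each branch is a 3x3
    convolution with a different dilation (atrous) rate; padding equals the
    rate so all branches keep the input resolution and can be concatenated."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```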
Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012, and a subset of MS-COCO 2014.
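A minimal sketch of the scale-attention idea: a small head predicts per-pixel weights, one per scale, and the score maps from differently resized inputs (already brought to a common resolution) are merged as a softly weighted sum rather than by average- or max-pooling. Layer sizes and names here are illustrative, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAttention(nn.Module):
    """Predict per-pixel, per-scale weights and fuse multi-scale score maps."""
    def __init__(self, in_ch, num_scales):
        super().__init__()
        self.to_weights = nn.Conv2d(in_ch, num_scales, kernel_size=3, padding=1)

    def forward(self, feats, score_maps):
        # feats: (B, C, H, W); score_maps: list of (B, K, H, W), one per scale
        w = F.softmax(self.to_weights(feats), dim=1)   # (B, S, H, W), sums to 1 over scales
        scores = torch.stack(score_maps, dim=1)        # (B, S, K, H, W)
        return (w.unsqueeze(2) * scores).sum(dim=1)    # (B, K, H, W)
```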
Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.
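For reference, the fully connected CRF used here is the Krähenbühl-Koltun model: the unary term comes from the DCNN's per-pixel label scores, and the pairwise term combines an appearance kernel over pixel positions p and intensities I with a smoothness kernel; w_1, w_2 and the sigmas are hyperparameters, and mu is the Potts label-compatibility function.

```latex
E(\mathbf{x}) = \sum_i \theta_i(x_i) + \sum_{ij} \theta_{ij}(x_i, x_j),
\qquad \theta_i(x_i) = -\log P(x_i),
```
```latex
\theta_{ij}(x_i, x_j) = \mu(x_i, x_j)\Big[
 w_1 \exp\!\Big(-\tfrac{\lVert p_i - p_j\rVert^2}{2\sigma_\alpha^2}
               -\tfrac{\lVert I_i - I_j\rVert^2}{2\sigma_\beta^2}\Big)
 + w_2 \exp\!\Big(-\tfrac{\lVert p_i - p_j\rVert^2}{2\sigma_\gamma^2}\Big)\Big]
```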
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, which can not be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve the state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
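One way to picture the CLIP-driven design (a sketch under assumptions, not the paper's implementation): a fixed text embedding per class, for example from CLIP's text encoder applied to a prompt such as "a computerized tomography of a {organ}", is broadcast over the image features, and a small head predicts one binary mask per class, so adding a new class only requires a new text embedding rather than retraining the whole model.

```python
import torch
import torch.nn as nn

class CLIPConditionedHead(nn.Module):
    """Hypothetical sketch: fuse per-class text embeddings with image features
    to predict one binary mask per class; dimensions and fusion are illustrative."""
    def __init__(self, feat_ch, text_dim=512):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_ch + text_dim, feat_ch, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, kernel_size=1),
        )

    def forward(self, feats, text_embeds):
        # feats: (B, C, H, W); text_embeds: (num_classes, text_dim), fixed per class
        B, _, H, W = feats.shape
        masks = []
        for t in text_embeds:                          # one class at a time
            t_map = t.view(1, -1, 1, 1).expand(B, -1, H, W)
            masks.append(self.fuse(torch.cat([feats, t_map], dim=1)))
        return torch.cat(masks, dim=1)                 # (B, num_classes, H, W)
```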
In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Mainly, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicle while performing actions to their destinations. To this end, we introduce the representation of semantic regions, which are areas where ego-vehicles visit while taking an afforded action (e.g., left-turn at 4-way intersections). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
This paper presents a simple and effective visual prompting method for adapting pre-trained models to downstream recognition tasks. Our method includes two key designs. First, rather than directly adding together the prompt and the image, we treat the prompt as an extra and independent learnable component. We show that the strategy of reconciling the prompt and the image matters, and find that warping the prompt around a properly shrunken image empirically works the best. Second, we re-introduce two "old tricks" commonly used in building transferable adversarial examples, i.e., input diversity and gradient normalization, into visual prompting. These techniques improve optimization and enable the prompt to generalize better. We provide extensive experimental results to demonstrate the effectiveness of our method. Using a CLIP model, our prompting method sets a new record of 82.8% average accuracy across 12 popular classification datasets, substantially surpassing the prior art by +5.6%. It is worth noting that this prompting performance already outperforms linear probing by +2.1% and can even match fully fine-tuning in certain datasets. In addition, our prompting method shows competitive performance across different data scales and against distribution shifts. The code is publicly available at https://github.com/UCSC-VLAA/EVP.
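A rough sketch of the "prompt around a shrunken image" design (sizes and names are assumptions, and the paper's exact warping may differ): the input is resized to a smaller resolution, placed at the center of the model's expected canvas, and a learnable border fills the remaining pixels, so the prompt is an independent component rather than being added onto image pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PadPrompt(nn.Module):
    """Hypothetical sketch: a learnable border prompt around a shrunken image."""
    def __init__(self, canvas_size=224, image_size=192):
        super().__init__()
        self.canvas = nn.Parameter(torch.zeros(1, 3, canvas_size, canvas_size))
        self.image_size = image_size
        self.pad = (canvas_size - image_size) // 2
        border = torch.ones(1, 1, canvas_size, canvas_size)
        border[:, :, self.pad:self.pad + image_size, self.pad:self.pad + image_size] = 0
        self.register_buffer("border", border)     # 1 on the prompt border, 0 inside

    def forward(self, x):
        x = F.interpolate(x, size=self.image_size, mode="bilinear", align_corners=False)
        p = self.pad
        x = F.pad(x, (p, p, p, p))                  # shrunken image centered on zeros
        return x + self.canvas * self.border        # learnable prompt fills the border
```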
The weakly supervised instance segmentation is a challenging task. The existing methods typically use bounding boxes as supervision and optimize the network with a regularization loss term such as pairwise color affinity loss for instance segmentation. Through systematic analysis, we found that the commonly used pairwise affinity loss has two limitations: (1) it works with color affinity but leads to inferior performance with other modalities such as depth gradient, (2) the original affinity loss does not prevent trivial predictions as intended but actually accelerates this process due to the affinity loss term being symmetric. To overcome these two limitations, in this paper, we propose a novel asymmetric affinity loss which provides the penalty against the trivial prediction and generalizes well with affinity loss from different modalities. With the proposed asymmetric affinity loss, our method outperforms the state-of-the-art methods on the Cityscapes dataset and outperforms our baseline method by 3.5% in mask AP.
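For context, the snippet below sketches the kind of symmetric pairwise color-affinity loss the abstract critiques, not the proposed asymmetric loss: neighboring pixels with similar color are pulled toward the same prediction, a constraint that an all-background (trivial) prediction also satisfies, which is exactly the failure mode described above. The color-kernel form and sigma are assumptions.

```python
import torch

def pairwise_color_affinity_loss(prob, image, sigma=0.15):
    """Symmetric pairwise affinity loss over horizontally adjacent pixels.
    prob: (B, 1, H, W) foreground probability; image: (B, 3, H, W) in [0, 1]."""
    color_diff = (image[:, :, :, 1:] - image[:, :, :, :-1]).pow(2).sum(dim=1)
    affinity = torch.exp(-color_diff / (2 * sigma ** 2))   # high for similar colors
    p, q = prob[:, 0, :, 1:], prob[:, 0, :, :-1]
    # Symmetric term: similar-looking neighbours should agree -- a constraint an
    # all-background prediction also satisfies, hence the trivial solution.
    disagreement = (p - q).abs()
    return (affinity * disagreement).mean()
```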